
    Computational Evidence for Laboratory Diagnostic Pathways: Extracting Predictive Analytes for Myocardial Ischemia from Routine Hospital Data

    Background: Laboratory parameters are critical parts of many diagnostic pathways, mortality scores, patient follow-ups, and overall patient care, and should therefore rest on standardized, evidence-based recommendations. Currently, laboratory parameters and their significance are treated differently depending on expert opinion, clinical environment, and varying hospital guidelines. In our study, we aimed to demonstrate the capability of a set of algorithms to identify predictive analytes for a specific diagnosis. As an illustration of our proposed methodology, we examined the analytes associated with myocardial ischemia, a well-researched diagnosis that provides a substrate for comparison. We intend to present a toolset that will accelerate the evolution of evidence-based laboratory diagnostics and thereby improve patient care. Methods: The data consisted of preexisting, anonymized emergency-ward records covering all patient cases with a measured troponin T value. We used multiple imputation, orthogonal data augmentation, and Bayesian model averaging to create predictive models for myocardial ischemia, each incorporating a different set of analytes as cofactors. By examining these models, we could then infer the predictive importance of each analyte in question. Results: The algorithms identified troponin T as a highly predictive analyte for myocardial ischemia. As this is a known relationship, we regard the predictive importance of troponin T as a proof of concept that the method works. In addition, the algorithms extracted known risk factors of myocardial ischemia from the data. Conclusion: In this pilot study, we chose an assembly of algorithms to analyze the value of analytes in predicting myocardial ischemia. By providing reliable correlations between the analytes and the diagnosis of myocardial ischemia, we demonstrated the possibility of creating unbiased, computationally derived guidelines for laboratory diagnostics in today's era of digitalization.
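To make the methods concrete, the following Python sketch illustrates the imputation-plus-model-averaging idea under stated assumptions: it imputes missing analyte values with scikit-learn's IterativeImputer and approximates Bayesian model averaging by BIC-weighting small logistic-regression models over analyte subsets. The exhaustive subset enumeration stands in for the study's orthogonal data augmentation step, and the function and column handling are illustrative, not the study's code.

```python
# Minimal sketch, not the study's code: impute missing analytes, then weight
# small logistic-regression models by BIC as a stand-in for Bayesian model averaging.
import numpy as np
from itertools import combinations
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression


def analyte_inclusion_probabilities(df, analytes, target, max_model_size=3):
    """Return an approximate posterior inclusion probability per analyte."""
    X = IterativeImputer(random_state=0).fit_transform(df[analytes])  # multiple-imputation stand-in
    y = df[target].to_numpy()
    n = len(y)
    bics, members = [], []
    for k in range(1, max_model_size + 1):
        for subset in combinations(range(len(analytes)), k):
            cols = list(subset)
            clf = LogisticRegression(max_iter=1000).fit(X[:, cols], y)
            margin = (2 * y - 1) * clf.decision_function(X[:, cols])
            log_lik = -np.sum(np.logaddexp(0.0, -margin))   # Bernoulli log-likelihood
            bics.append((k + 1) * np.log(n) - 2 * log_lik)  # +1 parameter for the intercept
            members.append(set(cols))
    bics = np.asarray(bics)
    weights = np.exp(-0.5 * (bics - bics.min()))            # BIC-based model weights
    weights /= weights.sum()
    return {a: float(sum(w for w, m in zip(weights, members) if i in m))
            for i, a in enumerate(analytes)}
```

Here `df` is assumed to be a pandas DataFrame of laboratory values with a binary diagnosis column; the returned dictionary gives each analyte's approximate inclusion probability, which is how an analyte such as troponin T would surface as highly predictive.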

    P3Depth: Monocular Depth Estimation with a Piecewise Planarity Prior

    Monocular depth estimation is vital for scene understanding and downstream tasks. We focus on the supervised setup, in which ground-truth depth is available only at training time. Based on knowledge about the high regularity of real 3D scenes, we propose a method that learns to selectively leverage information from coplanar pixels to improve the predicted depth. In particular, we introduce a piecewise planarity prior which states that for each pixel, there is a seed pixel that shares the same planar 3D surface with it. Motivated by this prior, we design a network with two heads. The first head outputs pixel-level plane coefficients, while the second one outputs a dense offset vector field that identifies the positions of seed pixels. The plane coefficients of seed pixels are then used to predict depth at each position. The resulting prediction is adaptively fused with the initial prediction from the first head via a learned confidence to account for potential deviations from precise local planarity. The entire architecture is trained end-to-end thanks to the differentiability of the proposed modules, and it learns to predict regular depth maps with sharp edges at occlusion boundaries. An extensive evaluation of our method shows that we set the new state of the art in supervised monocular depth estimation, surpassing prior methods on NYU Depth-v2 and on the Garg split of KITTI. Our method delivers depth maps that yield plausible 3D reconstructions of the input scenes. Code is available at: https://github.com/SysCV/P3Depth. Comment: Accepted at CVPR 2022
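As a rough illustration of the seed-pixel fusion described above, the PyTorch sketch below samples plane coefficients at offset-designated seed positions and blends the resulting depth with the per-pixel plane prediction via a learned confidence. The tensor shapes, the assumption that offsets are expressed in normalized image coordinates, and the linear-in-image-coordinates plane evaluation are simplifying guesses, not the paper's exact parameterization.

```python
# Rough sketch of the seed-pixel fusion idea; shapes and parameterization are assumptions.
import torch
import torch.nn.functional as F


def fuse_seed_plane_depth(plane_coeffs, offsets, confidence):
    """plane_coeffs: (B, 3, H, W) per-pixel plane parameters (a, b, d)
    offsets:      (B, 2, H, W) offsets to seed pixels, in normalized [-1, 1] coordinates
    confidence:   (B, 1, H, W) learned fusion weight in [0, 1]
    Returns a (B, H, W) depth map."""
    B, _, H, W = plane_coeffs.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).expand(B, H, W, 2)
    # Sample the plane coefficients of each pixel's seed pixel.
    seed_coeffs = F.grid_sample(plane_coeffs, grid + offsets.permute(0, 2, 3, 1),
                                align_corners=True)

    def plane_depth(c):  # evaluate depth = a*x + b*y + d at every pixel
        return c[:, 0] * xs + c[:, 1] * ys + c[:, 2]

    d_initial = plane_depth(plane_coeffs)  # prediction from the first head alone
    d_seed = plane_depth(seed_coeffs)      # prediction induced by the seed pixel's plane
    w = confidence.squeeze(1)
    return w * d_initial + (1 - w) * d_seed
```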

    Quantifying Data Augmentation for LiDAR based 3D Object Detection

    In this work, we shed light on different data augmentation techniques commonly used in Light Detection and Ranging (LiDAR) based 3D Object Detection. To this end, we utilize a state-of-the-art voxel-based 3D Object Detection pipeline called PointPillars and carry out our experiments on the well-established KITTI dataset. We investigate a variety of global and local augmentation techniques, where global augmentation techniques are applied to the entire point cloud of a scene and local augmentation techniques are applied only to points belonging to individual objects in the scene. Our findings show that both types of data augmentation can lead to performance increases, but it also turns out that some augmentation techniques, such as individual object translation, can be counterproductive and hurt overall performance. We show that when we apply our findings to the data augmentation policy of PointPillars, we can easily increase its performance by up to 2%. In order to provide reproducibility, our code will be publicly available at www.trace.ethz.ch/3D_Object_Detection
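For intuition, here is a minimal numpy sketch of the two augmentation families discussed above: a global rotation applied to the whole scene versus a translation applied only to the points of one object. The parameter ranges are illustrative defaults, not the paper's policy, and a real pipeline would also update the affected ground-truth bounding boxes.

```python
# Illustrative sketch of global vs. local point-cloud augmentation; ranges are assumptions.
import numpy as np


def global_rotation_z(points, max_angle=np.pi / 4, rng=None):
    """Global augmentation: rotate the entire (N, 3) scene point cloud about the z-axis."""
    rng = np.random.default_rng() if rng is None else rng
    a = rng.uniform(-max_angle, max_angle)
    c, s = np.cos(a), np.sin(a)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ rot.T


def local_object_translation(points, object_mask, max_shift=0.25, rng=None):
    """Local augmentation: translate only the points of a single object,
    selected by a boolean mask over the N points."""
    rng = np.random.default_rng() if rng is None else rng
    shifted = points.copy()
    shifted[object_mask] += rng.uniform(-max_shift, max_shift, size=3)
    return shifted
```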

    Competitive Policy Optimization

    A core challenge in policy optimization in competitive Markov decision processes is the design of efficient optimization methods with desirable convergence and stability properties. To tackle this, we propose competitive policy optimization (CoPO), a novel policy gradient approach that exploits the game-theoretic nature of competitive games to derive policy updates. Motivated by the competitive gradient optimization method, we derive a bilinear approximation of the game objective. In contrast, off-the-shelf policy gradient methods utilize only linear approximations and hence do not capture interactions among the players. We instantiate CoPO in two ways: (i) competitive policy gradient and (ii) trust-region competitive policy optimization. We theoretically study these methods and empirically investigate their behavior on a set of comprehensive, yet challenging, competitive games. We observe that they provide stable optimization, convergence to sophisticated strategies, and higher scores when played against baseline policy gradient methods.
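To make the contrast with plain policy gradients concrete, the numpy sketch below runs the competitive-gradient update on a bilinear zero-sum game f(x, y) = xᵀAy, the tabular analogue of the bilinear approximation CoPO builds on. CoPO itself estimates the corresponding gradient and mixed second-derivative terms from sampled trajectories, which this sketch does not attempt.

```python
# Minimal sketch of the competitive-gradient update on a bilinear zero-sum game.
import numpy as np


def competitive_step(x, y, A, eta=0.5):
    """One update for f(x, y) = x^T A y, where x minimizes and y maximizes f.
    The mixed second-derivative terms (A and A^T here) let each player anticipate
    the other's move; plain simultaneous gradient steps would drop them."""
    dx = -eta * np.linalg.solve(np.eye(len(x)) + eta**2 * (A @ A.T),
                                A @ y + eta * A @ (A.T @ x))
    dy = -eta * np.linalg.solve(np.eye(len(y)) + eta**2 * (A.T @ A),
                                -A.T @ x + eta * A.T @ (A @ y))
    return x + dx, y + dy


rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
x, y = rng.standard_normal(3), rng.standard_normal(3)
for _ in range(2000):
    x, y = competitive_step(x, y, A)
# The iterates contract toward the unique equilibrium at the origin, whereas
# simultaneous gradient descent/ascent on the same game spirals away from it.
print(np.linalg.norm(x), np.linalg.norm(y))
```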